Light-Field Imaging 101
A comprehensive introduction to unfocused light-field imaging (plenoptic 1.0)
1 Intended Audience
This guide aims to expand upon existing unfocused light-field (plenoptic 1.0) imaging learning material by providing detailed Python implementations of light-field image processing along with examples. The light-field image processing workflow closely follows that of the MATLAB-based Light-Field Imaging Toolkit [1], but leverages Python’s open-source ecosystem and Quarto’s enhanced capabilities for visualization and LaTeX formula integration.
By focusing specifically on the image processing aspects, this guide helps readers better understand the technical limitations and practical considerations of light-field imaging techniques. This guide assumes the reader is familiar with unfocused light-field imaging. Such an understanding can be achieved by browsing plenoptic.info or reading the unfocused light-field part of the Light-Field Camera Working Principles chapter (Pages 11-25) of the book Development and Application of Light-Field Cameras in Fluid Measurements [2].
2 Preprocessing
2.1 Raw Images
To illustrate light-field image processing concepts, a synthetic unfocused light-field image was generated and is shown in Figure 1. The image contains the raw pixel intensities as recorded by an unfocused light-field camera. Every lenslet of the camera projects the incident radiance into a defocused intensity distribution pattern: the light energy from each microlens is spread across multiple sensor pixels rather than being concentrated at the expected conjugate position. The image was generated using commercial ray-tracing software (OpticStudio, ANSYS).
2.2 Microlens Array Structure
The unfocused light-field camera model implemented in this investigation uses a hexagonal microlens array configuration for two primary reasons. First, hexagonal microlens arrays provide optimal spatial packing efficiency, maximizing the effective sensor area utilization. Second, the hexagonal pattern introduces additional computational complexity into light-field image processing algorithms (specifically in sampling-pattern interpolation) that merits careful examination in this guide.
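The packing-efficiency claim can be quantified. For ideal circular apertures, a hexagonal grid covers π/(2√3) ≈ 90.7% of the sensor plane, versus π/4 ≈ 78.5% for a square grid. A quick numerical check (not part of the processing pipeline, purely illustrative):

```python
import numpy as np

# Packing density = fraction of the array plane covered by the lenslets'
# inscribed circular apertures, assuming ideal circles on each grid.
hexagonal_density = np.pi / (2 * np.sqrt(3))  # hexagonal grid, ~0.9069
square_density = np.pi / 4                    # square grid, ~0.7854

print(f'Hexagonal packing density: {hexagonal_density:.4f}')
print(f'Square packing density:    {square_density:.4f}')
```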
The hexagonal microlens array has 127 × 127 complete lenslets. This core array is supplemented with partial lenslets along the periphery to achieve an overall rectangular shape. The total number of elements in the array is 16 447, which corresponds to the expected number of spots in the calibration image.
2.3 Calibration
The calibration process seeks to pinpoint the centroid of each lenslet’s corresponding sensor region. A calibration image is acquired with the main lens’s aperture reduced to a minimum. The resulting calibration image consists of an array of small bright spots, as shown in Figure 2.
The centroid identification process based on intensity peaks in an image is a standard image processing technique widely documented in the literature, not exclusive to light-field imaging. Since the synthetic calibration image contains no sensor noise, the calibration procedure has been intentionally simplified to emphasize conceptual clarity and streamline the explanation.
The spot centroid detection procedure consists of three main steps. First, a manual threshold is applied to the calibration image to identify potential spot locations. Second, a binary dilation with a disk-shaped structuring element expands these areas, ensuring that the subsequent weighted centroid calculation encompasses the full spot and its immediate surroundings. Finally, the scikit-image regionprops function is used to calculate the intensity-weighted centroid of each spot.
Code
import numpy as np
import tifffile
from skimage import morphology
from skimage.morphology import disk
from skimage.measure import label, regionprops
# Load calibration image
calibration_image = tifffile.imread('127x127_ref.tif')
# Apply hard-coded threshold (chosen to match the expected number of centroids)
binary_calibration = calibration_image > 8
# Perform a binary dilation to ensure coverage of the spots for the computation
# of the weighted centroid
dilated_calibration = morphology.binary_dilation(binary_calibration, disk(3))
# Create labels
labeled_calibration = label(dilated_calibration)
# Apply regionprops
regions_calibration = regionprops(labeled_calibration, intensity_image=calibration_image)
# Retrieve weighted centroids location and convert list to numpy array
centroids = np.array([region.centroid_weighted for region in regions_calibration])
print(f'Number of centroids detected: {len(centroids):,}'.replace(',', '\u2009'))

Number of centroids detected: 16 447
The threshold value was manually selected to ensure the number of detected centroids matches the total lenslet count (16 447) as described in Section 2.2.
The centroid localization procedure is demonstrated in Figure 3 using a representative calibration spot. The figure displays three frames: the first showing a magnified view of an individual calibration spot, the second showing the binary mask generated through thresholding, and the third showing the binary mask after a morphological dilation operation. The calculated weighted centroid position, determined by applying the dilated mask to the original intensity distribution of the spot, is indicated by a pink cross marker.
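The intensity-weighted centroid computed by regionprops can be reproduced with a few lines of NumPy, which makes the underlying calculation explicit. The following sketch uses a small synthetic spot whose intensity values are purely illustrative:

```python
import numpy as np

# Synthetic 5x5 spot with its peak slightly off-centre
spot = np.array([
    [0, 0, 1, 0, 0],
    [0, 2, 6, 3, 0],
    [1, 6, 9, 7, 1],
    [0, 3, 7, 4, 0],
    [0, 0, 1, 1, 0],
], dtype=float)

# Intensity-weighted centroid: pixel coordinates averaged with the
# pixel intensities as weights, giving a sub-pixel (row, col) location
rows, cols = np.indices(spot.shape)
total = spot.sum()
centroid = (np.sum(rows * spot) / total, np.sum(cols * spot) / total)
print(centroid)
```

Because the spot is slightly asymmetric, the centroid lands just off the geometric centre (2, 2), which is exactly the sub-pixel information the calibration step relies on.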
2.4 Reshaping
The next step is to reshape the 2D light-field image into a 4D light-field array as a function of S and T (the spatial coordinates) and U and V (the angular coordinates). Each centroid location from Section 2.3 corresponds to a position in the S and T space. From this position, a circular U and V map is extracted from the light-field image for each centroid. An interpolation step aligns the extracted pixel grid with the sub-pixel centroid location, and a masking operation removes information leaking in from neighboring lenslets.
Code
from scipy.interpolate import interpn
def get_roi(image, center, half_size):
    """
    Extract a square Region of Interest (ROI) from an image.

    Parameters
    ----------
    image : ndarray
        The input image from which to extract the ROI.
    center : tuple of int
        The (row, col) coordinates of the center point of the ROI as integer indices.
    half_size : int
        Half the size of the ROI. The total width/height will be (2*half_size+1).

    Returns
    -------
    ndarray
        A square sub-image centered at the specified coordinates with
        dimensions (2*half_size+1) × (2*half_size+1).
    """
    row, col = center
    return image[row-half_size:row+half_size+1, col-half_size:col+half_size+1]
# Load raw light-field image
raw_light_field_image = tifffile.imread('127x127_mla.tif')
# Hard-coded UV map radius
uv_radius = 7
# Hard-coded margin (for interpolation)
margin = 3
# Initialize integer grid
integer_grid = np.arange(-(uv_radius+margin), uv_radius+margin+1)
u_int_grid, v_int_grid = np.meshgrid(integer_grid, integer_grid)
# Create circular mask
circular_mask = u_int_grid**2 + v_int_grid**2 <= uv_radius**2
# Initialize light-field array
uv_diameter = 2*uv_radius+1
light_field_array = np.zeros((len(centroids), uv_diameter, uv_diameter))
# Loop over the number of detected centroids
for ii, centroid in enumerate(centroids):
    # Rounded centroid location
    rounded_centroid = centroid.astype(int)
    # Sub-pixel centroid offset, in (row, col) order
    offset = centroid - rounded_centroid
    # Create new grids for interpolation: np.meshgrid varies its first argument
    # along the columns and its second along the rows, so the column offset
    # (offset[1]) goes first and the row offset (offset[0]) second
    u_grid, v_grid = np.meshgrid(integer_grid+offset[1], integer_grid+offset[0])
    # Extract ROI around the centroid
    uv_roi = get_roi(raw_light_field_image, rounded_centroid, uv_radius+margin)
    # Interpolate the ROI to the offset grid
    interpolated_roi = interpn(
        (integer_grid, integer_grid),
        uv_roi,
        (v_grid, u_grid),
        bounds_error=False,
        fill_value=None
    )
    # Populate light-field array and perform masking operation (removing the margin)
    light_field_array[ii, :, :] = (interpolated_roi*circular_mask)[margin:-margin, margin:-margin]

The reshaping procedure is demonstrated in Figure 4 using a representative angular (U and V) space in the light-field image. The figure displays three frames: the first showing an angular space from the light-field image at the location of a rounded calibration spot, the second showing the same angular space interpolated at the exact location of the centroid (aligned with the pixel grid), and the third showing the circular masking to reject angular information from neighboring lenslets. The pink cross marker indicates the weighted centroid position. Yellow axes indicate the center of the ROI.
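The sub-pixel interpolation step can be sanity-checked on a toy example: for a linear ramp, linear interpolation reproduces the fractionally shifted values exactly, so the effect of sampling at a fractional offset is easy to verify. A minimal sketch (the ramp and the offset values are illustrative, not taken from the pipeline above):

```python
import numpy as np
from scipy.interpolate import interpn

# A linear ramp is reproduced exactly by linear interpolation,
# which makes the sub-pixel shift easy to verify
grid = np.arange(5)
rows, cols = np.meshgrid(grid, grid, indexing='ij')
ramp = 2.0 * rows + 3.0 * cols

# Fractional (row, col) offset, analogous to the centroid offset
offset = (0.25, 0.5)
shifted = interpn(
    (grid, grid),
    ramp,
    (rows + offset[0], cols + offset[1]),
    bounds_error=False,
    fill_value=None,
)
# Each interpolated sample now equals 2*(row + 0.25) + 3*(col + 0.5)
```

Because the ramp is linear, `shifted` matches the analytic values exactly; for real lenslet images the interpolation is of course only an approximation of the underlying intensity distribution.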